
Beyond Active Noun Tagging: Modeling Contextual Interactions for Multi-Class Active Learning



Abstract

We present an active learning framework to simultaneously learn appearance and contextual models for scene understanding tasks (multi-class classification). Existing multi-class active learning approaches have focused on utilizing classification uncertainty of regions to select the most ambiguous region for labeling. These approaches, however, ignore the contextual interactions between different regions of the image and the fact that knowing the label for one region provides information about the labels of other regions. For example, the knowledge of a region being sea is informative about regions satisfying the “on” relationship with respect to it, since they are highly likely to be boats. We explicitly model the contextual interactions between regions and select the question which leads to the maximum reduction in the combined entropy of all the regions in the image (image entropy). We also introduce a new methodology of posing labeling questions, mimicking the way humans actively learn about their environment. In these questions, we utilize the regions linked to a concept with high confidence as anchors, to pose questions about the uncertain regions. For example, if we can recognize water in an image then we can use the region associated with water as an anchor to pose questions such as “what is above water?”. Our active learning framework also introduces questions which help in actively learning contextual concepts. For example, our approach asks the annotator: “What is the relationship between boat and water?” and utilizes the answer to reduce the image entropies throughout the training dataset and obtain more relevant training examples for appearance models.
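The selection criterion described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: region beliefs, the region graph, the pairwise compatibility table, and the simple one-step belief update are all assumptions made for the sketch. It picks the region whose expected annotation most reduces the summed entropy of all region label distributions (the "image entropy"), accounting for how an answer propagates to contextually linked regions.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution, skipping zero terms."""
    return -sum(q * math.log(q) for q in p if q > 0)

def normalize(p):
    s = sum(p)
    return [q / s for q in p]

def propagate(beliefs, edges, compat, fixed_region, fixed_label):
    """Clamp one region to an answered label and update its neighbors'
    beliefs via the pairwise contextual compatibility table.
    compat[i][j] = compatibility of label i (first region) with label j (second)."""
    new = {r: list(p) for r, p in beliefs.items()}
    k = len(new[fixed_region])
    new[fixed_region] = [1.0 if i == fixed_label else 0.0 for i in range(k)]
    for (a, b) in edges:
        if a == fixed_region:
            new[b] = normalize([new[b][j] * compat[fixed_label][j] for j in range(k)])
        elif b == fixed_region:
            new[a] = normalize([new[a][j] * compat[j][fixed_label] for j in range(k)])
    return new

def image_entropy(beliefs):
    """Combined entropy of all regions in the image."""
    return sum(entropy(p) for p in beliefs.values())

def best_question(beliefs, edges, compat):
    """Select the region whose expected answer maximally reduces image entropy."""
    base = image_entropy(beliefs)
    best, best_gain = None, -1.0
    for r, p in beliefs.items():
        # Expected post-answer entropy, averaged over the annotator's likely labels.
        exp_h = sum(p[l] * image_entropy(propagate(beliefs, edges, compat, r, l))
                    for l in range(len(p)) if p[l] > 0)
        gain = base - exp_h
        if gain > best_gain:
            best, best_gain = r, gain
    return best, best_gain
```

With a confidently recognized "water" region linked to an uncertain neighbor (the boat/water example from the abstract), the criterion favors asking about the uncertain region, since the answer also sharpens the neighbor's belief through the contextual link.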
